
Bias and Discrimination


taz2024full: Analysing German Newspapers for Gender Bias and Discrimination across Decades

Urchs, Stefanie, Thurner, Veronika, Aßenmacher, Matthias, Heumann, Christian, Thiemichen, Stephanie

arXiv.org Artificial Intelligence

Open-access corpora are essential for advancing natural language processing (NLP) and computational social science (CSS). However, large-scale resources for German remain limited, restricting research on linguistic trends and societal issues such as gender bias. We present taz2024full, the largest publicly available corpus of German newspaper articles to date, comprising over 1.8 million texts from taz, spanning 1980 to 2024. As a demonstration of the corpus's utility for bias and discrimination research, we analyse gender representation across four decades of reporting. We find a consistent overrepresentation of men, but also a gradual shift toward more balanced coverage in recent years. Using a scalable, structured analysis pipeline, we provide a foundation for studying actor mentions, sentiment, and linguistic framing in German journalistic texts. The corpus supports a wide range of applications, from diachronic language analysis to critical media studies, and is freely available to foster inclusive and reproducible research in German-language NLP.


Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review

Li, Ming, Enkhtur, Ariunaa, Yamamoto, Beverley Anne, Cheng, Fei

arXiv.org Artificial Intelligence

ChatGPT and other Generative Artificial Intelligence (GAI) models tend to inherit and even amplify prevailing societal biases because they are trained on large amounts of existing data. Given the increasing use of ChatGPT and other GAI by students, faculty members, and staff in higher education institutions (HEIs), there is an urgent need to examine the ethical issues involved, such as potential biases. In this scoping review, we clarify how biases related to GAI in higher education settings have been discussed in recent academic publications and identify which types of potential bias are commonly reported in this body of literature. We searched for academic articles written in English, Chinese, and Japanese across four main databases concerned with GAI usage in higher education and bias. Our findings show that while there is an awareness of potential biases around large language models (LLMs) and GAI, the majority of articles touch on "bias" at a relatively superficial level. Few identify what types of bias may occur under what circumstances, and fewer still discuss the possible implications for higher education institutions, staff, faculty members, or students. There is a notable lack of empirical work at this point, and we call for higher education researchers and AI experts to conduct more research in this area.


Viewpoint: Regulatory Interest in Big Data, AI More Than a Carrier Problem - Carrier Management

#artificialintelligence

The California Insurance Commissioner and the California Department of Insurance (CDI) recently issued a bulletin regarding industry bias and discrimination. The bulletin acknowledged allegations of bias and discrimination in the industry and gave notice to insurance players that the CDI is watching and that "bias and discrimination in any form will be investigated and will not be tolerated." The bulletin is addressed to "All Admitted and Non-Admitted Insurance Companies, Licensees, and Other Interested Parties" -- clearly intending to cause awareness and attention beyond the carrier ecosystem. So, what does this mean? California has been a leader in following Europe regarding consumer protection laws.


What Is Responsible Artificial Intelligence?

#artificialintelligence

Artificial Intelligence (AI) has revolutionized the way we live. Along with the growing influence of algorithms in how business is organized, these new technologies are impacting our personal decisions regarding where we travel, what we buy, read, or which music we listen to. Given AI's prevalence as an increasingly powerful technology, it is important that we trust it to be a source of good for our society. Yet, the issue of inherent bias and discrimination present in the data built into AI has been widely documented. Experts from Warwick Business School (WBS) have been working on finding the source of such bias and how to minimize it.


White House Unveils Artificial Intelligence 'Bill Of Rights' - AI Summary

#artificialintelligence

The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people's personal data and limit surveillance. Earlier this year, after the publication of an AP review of an algorithmic tool used in a Pennsylvania child welfare system, OSTP staffers reached out to sources quoted in the article to learn more, according to multiple people who participated in the call. "If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions," said Nelson, who also serves as deputy assistant to President Joe Biden. The white paper also did not specifically address AI-powered technologies funded through the Department of Justice, whose civil rights division separately has been examining algorithmic harms, bias and discrimination, Nelson said. Tucked between the calls for greater oversight, the white paper also said when appropriately implemented, AI systems have the power to bring about lasting benefits to society, such as helping farmers grow food more efficiently or identifying diseases.


Artificial intelligence suffers from some very human flaws. Gender bias is one

#artificialintelligence

Last month, Facebook parent Meta unveiled an artificial intelligence chatbot said to be its most advanced yet. BlenderBot 3, as the AI is known, is able to search the internet to talk to people about almost anything, and it has abilities related to personality, empathy, knowledge and long-term memory. BlenderBot 3 is also good at peddling anti-Semitic conspiracy theories, claiming that former US President Donald Trump won the 2020 election, and calling Meta Chairman and Facebook co-founder Mark Zuckerberg "creepy". It's not the first time an AI has gone rogue. In 2016, Microsoft's Tay AI took less than 24 hours to morph into a rightwing bigot on Twitter, posting racist and misogynistic tweets and praising Adolf Hitler.


Artificial Intelligence (AI) and the Risk of Bias in Recruitment Decisions

#artificialintelligence

As part of the UK data protection authority's new three-year strategy (ICO25), launched on 14 July, UK Information Commissioner John Edwards announced an investigation into the use of AI systems in recruitment. The investigation will have a particular focus on the potential for bias and discrimination stemming from the algorithms and training data underpinning AI systems used to sift recruitment applications. A key concern is that training data could be negatively impacting the employment opportunities of those from diverse backgrounds. Bias is a particular risk in AI or machine learning systems designed not to solve a problem by following a set of rules, but instead to "learn" from examples of what the solution looks like. If the data sets used to provide those examples have bias built in, then an AI system is likely to replicate and amplify that bias.
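The teaser above describes the core mechanism: a system that learns decision rules from historical examples will replicate whatever bias those examples contain. A minimal sketch in Python, using entirely hypothetical hiring data, illustrates the point: the "model" below does nothing more than mirror historical hire rates per group, so a skew in the training examples becomes a skew in its decisions.

```python
# Hypothetical illustration: a toy model that learns hire rates from
# biased historical examples and then replicates that bias at decision time.
from collections import defaultdict

# Fabricated historical decisions: (group, hired). The bias is built into
# the examples themselves -- group "a" was hired far more often than "b".
history = [("a", 1)] * 80 + [("a", 0)] * 20 + [("b", 1)] * 40 + [("b", 0)] * 60

counts = defaultdict(lambda: [0, 0])  # group -> [hires, total applicants]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire(group):
    """Recommend hiring if the historical hire rate for the group is >= 50%."""
    hires, total = counts[group]
    return hires / total >= 0.5

print(predict_hire("a"))  # True  -- the learned rule favours group "a"
print(predict_hire("b"))  # False -- the historical skew is reproduced
```

Real recruitment-sifting systems are far more complex, but the failure mode is the same: nothing in the learning step distinguishes a genuine signal from a historical prejudice encoded in the data.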


Should the Federal Government Regulate Artificial Intelligence?

#artificialintelligence

WASHINGTON, July 12, 2022 – Representatives from academia and a nonprofit diverged at a Bipartisan Policy Center event Tuesday over whether the government should step in and minimize problems associated with artificial intelligence, including bias and discrimination in algorithms. "We really do want actors to help us establish national and international guidelines," said Miriam Vogel, president and CEO of EqualAI, a nonprofit that seeks to reduce bias in AI. "We are driving full speed without lanes, without speed limits to manage the expectations." While acknowledging the benefits of AI in society today, Vogel said its algorithms present risks that often lead to bias and discrimination. She shared the example of how facial recognition misses certain voices or skin tones. AI is used in various sectors and powers algorithms that cater services to individuals.


The EU Artificial Intelligence Act - recent updates

#artificialintelligence

The European Parliament's Legal Affairs (JURI) Committee, one of the 20 standing committees made up of Members of the European Parliament, recently held a session discussing the EU Artificial Intelligence Act ("AI Act"). Here, we highlight key 'thinking points' discussed to give an indication of where the AI Act may change from its current draft. The session was short, so potential answers will be the subject of further debate. For the background on the European Commission's proposed AI Act, see our articles "Artificial intelligence - EU Commission publishes proposed regulations" and "EU Artificial Intelligence Act - what has happened so far and what to expect next". AI has the potential to bring many benefits to users and wider society.


Oh great -- AI can not only be racist and sexist, but ageist too

#artificialintelligence

We have accepted the use of artificial intelligence (AI) in complex processes -- from health care to our daily use of social media -- often without critical investigation, until it is too late. The use of AI is inescapable in our modern society, and it may perpetuate discrimination without its users being aware of any prejudice. When health-care providers rely on biased technology, there are real and harmful impacts. This became clear recently when a study showed that pulse oximeters -- which measure the amount of oxygen in the blood and have been an essential tool for clinical management of COVID-19 -- are less accurate on people with darker skin than lighter skin. The findings resulted in a sweeping racial bias review now underway, in an attempt to create international standards for testing medical devices.